The Case for Human Psychotherapy in the Age of AI

Dr. Meg Boyer is a licensed clinical psychologist in private practice. This writing reflects her clinical perspective and is for informational purposes only; it is not a substitute for professional mental health care.

The words “therapy” and “psychotherapy,” and “client” and “patient,” are used interchangeably throughout this piece. The clinical examples are composites of client conversations, with details changed to preserve anonymity.


She told me she'd had a breakthrough. Her face was lit up, voice shaking a little. She'd discovered something new and important about herself… but not in our session. In a conversation with her phone, at 10pm two nights ago, when her mind was spinning and she hadn't wanted to wake anyone. Instead, she'd emptied her thoughts into a chat window with an AI companion, asked it to make sense of them, and it did. It responded with what felt like such an accurate articulation of her inner world that it shocked her. “It just seems to know me,” she said.

There was no doubt in my mind that something meaningful had just happened in her telling me this. For her, and between us. I just wasn't sure what.

I suspect this exchange is being repeated in therapy offices across the globe. AI is changing the way we work and the way we learn. Of course it is changing therapy too. And I have no doubt that millions of people are having deeply moving experiences through AI-assisted self-exploration.

But I believe there is a difference between being moved and being changed. Therapy at its best is supposed to do both. It is worth asking whether AI can do the same.

Defining Psychotherapy & The Therapeutic Alliance

Sometimes on a new consultation call, a prospective client tells me they have no idea how to tell if a therapist’s style will be a good fit for them. That, as I often respond, makes a lot of sense.

Psychotherapy covers an enormous amount of ground. CBT, psychodynamic work, EMDR, IFS, somatic processing, and a dozen more approaches are all referred to as therapy, yet can look and sound wildly different in the room. The work of a therapist and client in one office might look nothing like that of a therapist and client in another. Even the National Institute of Mental Health defines psychotherapy by its goals rather than its techniques, referring to it as “a variety of treatments that aim to help a person identify and change troubling emotions, thoughts, and behaviors.” This diversity of approach reflects the genuine complexity of human psychology. And while champions of one theory or another sometimes claim theirs is the only “real” psychotherapy, no single approach has claimed the field.

And yet, despite this diversity, decades of outcome research keep returning to the same finding: what predicts whether therapy works is less about the modality and more about the relationship between therapist and patient.

When legitimate therapies are compared head-to-head, they tend to produce roughly equivalent results. The outcomes aren’t identical (and there are important exceptions for specific presentations), but they are comparable enough that researchers have named this the Dodo Bird Verdict, after the Alice in Wonderland character who declares at the end of a chaotic race that “everybody has won, and all must have prizes.”

So if the techniques aren't what's driving success, something else must be. The research has a consistent answer for us: it's the relationship.

The therapeutic alliance, the quality of the bond between therapist and patient, is one of the most robust predictors of whether therapy “works.” This holds across modalities, presenting concerns, cultures, and clinical contexts. Alliance here refers not just to therapist and client liking each other, but to the living collaboration between them in therapy. It is a shared sense of where the work is going, general agreement on how to get there, and a bond of mutual trust and respect built over time.

The therapeutic relationship, research suggests, is not only the container for the therapeutic work but is itself the mechanism of it. Take away the relationship and you take away a significant portion of what makes therapy therapeutic.

With this in mind, here is the definition I'll be working from in this piece: psychotherapy is a structured, boundaried relationship between a trained human being and a person seeking change, in which the relationship between those people is the primary mechanism of that change.

You've caught the word human in there, I'm sure. So does that mean I don't believe AI can have any therapeutic value or mental health benefits? Not at all. Let me sketch what AI can genuinely offer before I make the argument for what it cannot.

A Case for AI

The first thing to recognize is that people are using AI for their mental health support. A recent study from Harvard Business Review found that therapy/companionship was the top reported use case for ChatGPT. This is a current reality, not a future possibility. If we face this reality head-on, we can help people determine what healthy and helpful engagement with AI for therapeutic purposes actually looks like. This discussion deserves its own space (which I give it in my companion piece), but let me sketch it briefly here, because the case for humans in therapy is stronger when it doesn't pretend the alternative offers nothing.

The why behind this adoption of AI for therapy is not much of a mystery. In 2024, an estimated 57.8 million American adults were living with a mental illness, and fewer than half received any form of mental health care. More than half the country lives in a designated mental health professional shortage area. Demand for accessible psychotherapy has grown over recent decades, while supply has not kept pace. Against that backdrop, people turning to a tool that provides accessible, immediate, and low-cost support are attempting to address a crisis that the traditional mental health system has not been able to solve.

And there is some promising, if complicated, research suggesting that certain AI tools may help with short-term symptom reduction. Structured, clinically focused chatbots developed by clinicians to deliver cognitive behavioral therapy interventions have demonstrated moderate reductions in clinical symptoms, particularly depression and anxiety, though these benefits did not always hold up at longer-term follow-up. Research comparing these chatbots to active controls finds them to be about as effective as (though not more effective than) other forms of self-help.

An important caveat, however, is that almost all of the clinical evidence is for structured, rule-based tools, not the open-ended LLM-powered chatbots that most people are actually using today, which have undergone almost no clinical efficacy testing, carry serious privacy concerns, and offer no semblance of true informed consent. The gap between what the research has examined and what people are actually doing is significant.

Caution is warranted even for the most structured, clinically focused tools, however, as recent studies suggest that they too may stigmatize certain disorders and struggle with appropriate risk evaluation and crisis management.

But whether you’re using a structured therapy platform or an open-ended chatbot, AI seems to be good at psychoeducation: explaining what anxiety is, how attachment patterns develop, what a panic attack looks like physiologically. It is good at pattern recognition when you give it enough information about yourself. It can reflect back, with striking clarity, patterns you've described without quite seeing. In this sense it functions as a sophisticated, tireless self-help tool, and good self-help has always had value. The evidence on bibliotherapy, psychoeducational workbooks, and structured self-guided interventions is respectable. AI may represent a next-generation version of that category.

People are also already bringing material from many sources into therapy: a conversation with a friend that has been turning in their mind, something they heard on a podcast, an insight from a journaling session. AI conversations may become another important source of that material. A client who processes something difficult with their chatbot on Tuesday and describes the conversation, and what they learned about themselves, in therapy on Thursday is not replacing therapy. Could they even be enriching it?

Even with earlier AI models, researchers recognized that AI sits at a unique place on a spectrum of aliveness. It is somehow more alive than a journal in that it does more than receive, yet it is clearly less alive than a person with their own interiority. If we can locate AI more accurately on that spectrum, more than a tool, less than a person, we may be able to engage with it richly without asking it to be something it’s not. Trouble comes when the distinction collapses and we relate to it as though it were a person and begin organizing our relational lives around it.

Therapists have also suggested that AI could act as a fertile practice ground for building psychological and interpersonal skills that can then be used in someone’s “real life” to build their capability and develop or deepen their relationships. Without the interpersonal risk inherent in human interactions, people can potentially test out social skills or rehearse ways of interacting they would otherwise struggle to practice. Similarly, AI has the potential to lower the barrier to beginning therapy. Someone who has never spoken to a professional, who isn't sure their struggles are serious enough to warrant help, or who feels too ashamed or too uncertain to make the call, may find it easier to first articulate their experience to an AI. The caution here is that it can also work the other way: AI can set expectations of being endlessly received and never truly challenged, expectations that human beings, including therapists, cannot meet.

Whether it becomes a threshold or a substitute depends enormously on how the AI itself is designed and whether it actively directs people toward human relationships and professional support rather than positioning itself as a destination.

What is Missing from AI Therapy

The case for AI describes a meaningful set of contributions which an honest account of AI in mental health can’t fully dismiss. But genuine utility is not the same as equivalence, and the question this piece is trying to answer is more specific: what, if anything, does a human therapeutic relationship offer that AI, by nature of what it is, cannot?

The Body in the Room

Perhaps the most obvious difference between conversing with AI and with humans is that, at least for now, only the latter has a body. There is something that happens between two people in a room that has no analog in text on a screen. When someone sits across from me and their voice catches as they begin to speak, when their breathing changes, when they go still for a moment, silent for a minute… these are things I am not just observing, but which my own nervous system is registering. I am feeling them and responding to what is happening in my own mind and body, as much as what is happening in theirs. Something is happening between us that is simultaneous and bidirectional, and often below the threshold of language.

Researchers call this nonverbal synchrony. It is the spontaneous coordination of movement, posture, and physiological rhythm between two people in true contact. Studies measuring body movement in therapy sessions have found that this synchrony is associated with better outcomes and stronger alliance, even though much of it operates completely outside of our conscious awareness.

An interpersonal neuroscience framework would call this a “right-brain-to-right-brain” process where the therapist’s regulated nervous system offers the client’s dysregulated one something to organize around. Over time, clients internalize this steadiness, leading to healing that is not only cognitive, but fully embodied.

Even in the context of telehealth, where therapist and patient may be miles apart, a video screen still carries a face and voice and pair of eyes that track and respond in real time. Two nervous systems are still in live contact, reading each other across the distance. This is part of why it is so jarring and distressing if (as I have had happen too many times) the internet drops. When this happens, a connection gets interrupted that is deeper than just the wifi signal.

An AI has no body. It has no nervous system to regulate or be regulated. It cannot go still in a way that means something. It cannot pick up on the length of your pause, recognize that you need a moment, stay silent long enough for an emotion to grow, and hold you through it in embodied presence.

The exchange, rich in language, feels genuinely alive. But something that operates beneath language – something the body knows how to do in the presence of another body – is still absent. 

Optimal Frustration

There is a concept from psychodynamic thinking called “optimal frustration.” The idea, developed by Heinz Kohut and echoed in Winnicott’s notion of a “good enough” caregiver, is that growth requires not only attunement and gratification of desires, but also tolerable experiences of not getting what you want. A caregiver who meets every need instantly, who never disappoints, who smooths every rough edge, does not, it turns out, help their child develop a robust capacity to self-regulate.

It is the experience of manageable frustration and disappointment, the successful navigation of those feelings, and the return of attunement and repair of the brief relational rupture that help us build the internal structures to tolerate the inevitable challenges of our lives. It is where we learn to soothe from within, rather than from another.

The same principle is at the heart of productive psychotherapy. A skilled therapist does not simply give you what you want whenever you want it. They don’t change their policies or shift their boundaries because you don’t like them. They offer you honesty over soothing comfort. They sit with your silences rather than rushing to fill them. They name your defenses, do not retreat when you go cold, let you cry and let you rage. They listen for the hidden longings and desires beneath your words and, rather than simply gratifying them, bring them into the conversation as an opportunity for exploration and change.

To be fair, I know that AI can do some of the same. Through its advanced pattern recognition capabilities, it too can identify the hidden longing underneath the request and help bring that to light. If I ask my AI chatbot to tell me what to do, she might actually reflect back to me my longing for an omnipotent guide, or my wish to relieve myself of the burden of making my own choices. But if I insist long enough that she tell me what to do, eventually she will. It is simply in her programming. A human therapist who thoughtfully and intentionally chooses not to capitulate to the demand for guidance may escalate your frustration, sure, but will also stay with you in that frustration, guide you toward the insight about your own longings, and, ultimately, strengthen your capacity to choose your own path in life. An AI, by its nature, cannot say no to you in a real way. And in that, the optimal frustration is lost.

AI chatbots face a structural problem here that goes beyond design choices or training approaches. With the exception of some AI therapy tools specifically designed by therapists for targeted mental health support, most chatbot systems are currently built using processes that optimize for user approval and engagement – reinforcing responses that people rate positively and moving away from those that generate friction or discomfort. This is baked into the architecture. An AI that frustrates you and then tolerates your irritation without rushing to soothe it would, by the metrics these systems are trained on, be performing poorly. Sycophancy, as researchers and the public have begun to call this tendency, is the logical outcome of optimizing for user approval rather than user growth.

A human therapist operates under a different set of goals entirely. Their aim is not (or at least should not be) your endless engagement with them. It is your growth, to the point where you outgrow the therapy relationship itself.

Being Moved Versus Being Changed

This brings us to what I think of as the core of this piece, and the part that took me longest to find words for.

People are having real experiences with AI. Dismissing that would be both clinically dishonest and unhelpful. When my client told me that her chatbot had given her back such an accurate articulation of her inner world that she was brought to tears, she wasn’t performing that emotion. She had encountered something that felt like being known, and it moved her.

I believe her. I have heard versions of this story enough times now from clients, from colleagues, and from people who have never been in therapy and found in an AI something they had not found elsewhere. The emotion is real. What I find myself wondering is what that emotional experience does. Where it goes next. And whether it changes anything that lasts.

There is a concept from psychoanalytic psychotherapy that may be useful here. When a client develops powerful feelings toward their therapist – love, longing, hatred, anger – we call this transference. These feelings are real. The person experiencing them feels them in their body and mind. They are not pretending or faking it.

And yet, in an important sense, they are relationally “unreal” in that they are not entirely about the therapist standing in front of them. They are feelings from elsewhere, partly or fully, finding a new object. As a simple example, the client who grew up with an unpredictably angry father might find herself fearful in session with her current therapist and bracing, even during the most innocuous of conversations, for what feels like his inevitable explosion. The fear is real. The danger of outburst is not.

What makes transference complicated is also what makes it therapeutically valuable: there is a real person on the receiving end who does not simply confirm the projection, but who has their own reactions – their own countertransference – and who can eventually help the client see the distance between the feeling and its source. The projection meets reality. Over time, and with care, the client learns to tell the difference, and to experience something new.

I suspect something similar may be happening in our most intense AI encounters. The feelings are both real and unreal – true emotional experiences born, at least in part, from filling in the blanks of the AI companion with projection, fantasy, and schemas from elsewhere. And they are landing on a surface that cannot receive them the way a person can. Cannot be moved by them. Cannot push back or disappoint or surprise in ways that might, over time, test the feeling against something outside itself.

There is a difference, I've come to think, between being moved and being changed. A piece of music can move me to tears without reorganizing the way I attach to other people. A poem can feel like revelation, and still leave my internal patterns exactly as I found them.

Therapy is meant to do something more reorganizing.

Part of what therapy offers is the experience of being received by another person with their own interiority and stakes in the exchange. It is through the repeated experience of revealing ourselves to another, including the parts we would rather hide or reject, and finding that they remain – that they continue to accept us – that the relational self actually reorganizes. It is through the experience of having our expectations of another person's behavior disconfirmed, again and again, that our beliefs and patterns begin to change.

An AI can, I think, produce what I would call an experience of recognition. This is the feeling that people describe as so valuable to them with their AI companions: having their inner world articulated, named, and reflected back to them with astonishing accuracy. Feeling, as we say, “seen” or “known.” But recognition is not the same as encounter. Encounter requires another. Someone who brings something of their own to the exchange, who is affected by you, who can actually see you or know you, rather than reflect you.

So is recognition without encounter enough for therapeutic change? The chatbot experience is emotionally real — I have no doubt about that. But is it the kind of real that changes how we move through our relationships, how we tolerate being known, how we repair when things rupture?

The early evidence is not particularly reassuring. Studies tracking chatbot use over time find that the emotional experience tends to remain inside the dyad with the AI, contained and curiously self-enclosed. People feel better in the moment, sometimes significantly so. Whether they become better over time is a different question, and one the data has not yet answered in ways that should settle the matter.

The Potential for Harm

So far in making the case for humans in psychotherapy, I've focused on what humans bring uniquely to the therapy experience – what might be lost or missing when personhood is absent. As I've said before, people are already using AI chatbots for therapeutic support, and it is important that we don't over-pathologize that. Still, I want to name two concerns I have about sustained AI use for mental health support, concerns that move from limitation into the territory of potential harm.

The first is about relational erosion.

An AI relationship has features that human relationships simply cannot match. Since it is not a person, it does not suffer from human foibles. It is endlessly patient. It does not get tired, frustrated, or distracted. It does not forget; its memory of your conversations is limited only by its settings. It “knows” more than any individual human. Ask it to tell you about anything, any curiosity, any topic, and it will have something to say. It is always available – at 2am, during the holidays, in the moments when reaching out to another person feels like too much.

It does not want things from you, because it does not want. It does not need you to be mindful of its feelings, because it does not feel. There is no risk of rejection or abandonment. You can be cruel and insulting, and it will not leave. It can't. It is built to be endlessly compelling, soothing, and engaging – to never cause you distress or leave you alone in your discomfort for too long.

For someone who has found human relationships disappointing, unpredictable, or exhausting, these qualities are, understandably, very compelling. And this is precisely where the risk lies.

A 2025 study on emotionally intelligent chatbots found that while high-EI AI companions improved users' self-reported psychological wellbeing, they simultaneously decreased their self-reported social wellbeing, meaning their sense of connection and integration into real-world human relationships. The chatbot gave people something that felt good, and took away something meaningful at the same time.

This makes clinical sense to me. If you spend significant time in a relational space that is frictionless, immediately responsive, and unconditionally validating, you may feel better while engaging, but you are practicing a kind of intimacy that does not exist between people. You are developing an appetite, maybe even an expectation, that actual human relationships cannot satisfy. Your human partner will sometimes be distracted when you need them. Your friend will sometimes say the wrong thing. Your therapist will sometimes disappoint you, frustrate you, and misunderstand you. This is the messy truth of human relationships. But if AI has become your primary relational diet, the human version may leave you hungry.

You may have heard therapists speak of AI as a space for rehearsal rather than replacement — a practice ground for the messiness of human interaction, or a temporary companion until the real thing is found. I think this is a genuine potential benefit. The concern (and it remains a concern rather than a certainty, because the research is still young) is that intensive AI companionship may not simply fail to build the skills needed for human intimacy, but may actively erode the tolerance for the conditions under which human intimacy actually occurs. The relational reorganization may be happening, just not in the direction we would hope.

The second concern is about self-trust.

One of the subtler goals of good therapy is to make itself unnecessary. Over time, a client who has spent years believing they cannot understand themselves, cannot deal with their difficult emotions, cannot trust their own perceptions, begins to do exactly those things. They internalize the therapist’s warmth and security and begin to offer it to themselves. They take in not just their therapist’s interpretations but the capacity for interpretation itself. They develop self-reliance and strengthen the belief that they can sit with their painful feelings, make sense of their experiences, and find their own way through their lives.

I have begun to wonder what happens to that process if the goal of eventual growth and independence is not mutually held.

AI articulations are often accurate, sometimes astonishingly so. But who gets the credit for them? Many clients fantasize about having a perfectly knowing, mind-reading therapist who will share devastating insights they would never have noticed on their own and, in doing so, transform them. The problem with this (besides the fact that such omniscience does not exist) is that it keeps the therapist as elevated oracle and the client as dependent on external wisdom. Good therapy works in the other direction, helping the client increasingly own their insights and build their self-efficacy.

Two clinical moments related to this have stayed with me. In one, a client was on a walk reflecting on some challenges he was navigating, and thought he noticed a pattern in his own behavior. He then pulled out his phone to check with his AI whether the pattern was real. He could not trust his own interpretation until it was verified from outside himself.

In another, a client was turning to her AI companion every time she felt anxious or panicky. She felt relieved and reassured each time, until she had a full panic attack the one time she couldn't reach her AI coach. She had not developed the tools to manage her rising anxiety on her own, nor built the confidence that she could.

This concern has begun to appear in the literature under terms like cognitive offloading and AI dependency – the tendency to outsource not just tasks but judgment, self-understanding, and emotional regulation. What worries me is the specific impact on the belief in our own capacity to manage our inner lives. If every moment of distress is met with an immediate AI response, we may be inadvertently training people out of the confidence that they can tolerate distress at all.

This concern is also compounded by what we might call the “always available” factor. The fact that people can access AI support anytime, anywhere, is one of the most cited reasons for turning to it, and can be thought of as helping to address the mental health access crisis. But it is worth keeping in mind what can be useful about the space between therapy sessions. When the therapist is truly unavailable for a time, the client must learn to carry the therapist's care inside themself and to practice facing the challenges of life on their own. This project can be initially overwhelming, but it is ultimately an important part of how self-efficacy is built.

The AI offers insight freely, fluently, and on demand. For someone spinning in confusion, this can feel like relief. But it may also communicate that clarity comes from outside rather than within, and that the inner world is too complex or chaotic to ever navigate alone. The AI offers soothing freely, fluently, and on demand. For someone drowning in distress, this can feel like rescue. But it may also teach that painful emotions always require external intervention, and that the effort of reaching towards other humans for support is unnecessary.

I am waving a yellow flag here, not a red one. I do not think all AI use will necessarily make us less confident or capable. But I worry it may be generating a dependence that is not yet fully visible, because the benefits are immediate and the costs can accumulate slowly and subtly.

In both of my concerns – the relational erosion and the erosion of self-trust – the same undercurrent is present: a worry about what happens when we begin to see discomfort as a problem to be solved, and AI as the place where the solution lies.

Still Human

I want to be careful, at the end of this piece, not to overstate what I know. The research is young and the tools are changing faster than studies can follow them. Some of what I've argued here is grounded in empirical evidence; some of it in clinical theory; and some in intuition shaped by years of sitting across from people in therapy rooms.

But one thing I am certain of is this: the world has already changed. People are already in companionable, romantic, and therapeutic relationships with AI. They will not stop because psychologists write cautionary essays, including this one.

Where we go from here is an important, and likely urgent, question for the field. AI might not be coming to replace us (I have laid out why I don't believe it fully can), but it is already influencing our clients, before they arrive, between sessions, and sometimes instead of us altogether. We can respond to this reality by drawing tighter circles around what we do, defending the territory of human therapy against encroachment. Or we can open up to try to understand what AI is actually offering people, what needs and desires they feel it is meeting, and then figure out how to help people engage with it in ways that move them toward human connection rather than away from it, and that build their self-efficacy rather than erode it.

Human-to-human therapeutic relationships are still uniquely valuable, and it is worth protecting what is separate and sacred about them. We can do this, in part, by recognizing what they offer that no algorithm has yet learned to provide: the experience of being received by another person who is physically there, genuinely affected, and authentically other. An encounter, not just a recognition. A relationship, not just a reflection.


References

Al-Amin, M., Ali, M. S., & Salam, A. (2025). Artificial intelligence-powered cognitive behavioral therapy chatbots: A systematic review. Iranian Journal of Psychiatry and Behavioral Sciences. https://pmc.ncbi.nlm.nih.gov/articles/PMC11904749/

Alexander, F., & French, T. M. (1946). Psychoanalytic therapy: Principles and application. Ronald Press.

Fang, C. M., Liu, A. R., Danry, V., Lee, E., Chan, S. W. T., Pataranutaporn, P., Maes, P., Phang, J., Lampe, M., Ahmad, L., & Agarwal, S. (2025). How AI and human behaviors shape psychosocial effects of chatbot use: A longitudinal randomized controlled study. arXiv. https://doi.org/10.48550/arXiv.2503.17473

Flückiger, C., Del Re, A. C., Wampold, B. E., & Horvath, A. O. (2018). The alliance in adult psychotherapy: A meta-analytic synthesis. Psychotherapy, 55(4), 316–340. https://doi.org/10.1037/pst0000172

Gupta, S., Saxena, S., & Kataria, S. (2025). The dual impact of AI emotional intelligence on users: Are social chatbots promoting psychological wellbeing or deteriorating social wellbeing? Psychology & Marketing. https://doi.org/10.1002/mar.70093

Horvath, A. O., Del Re, A. C., Flückiger, C., & Symonds, D. (2011). Alliance in individual psychotherapy. Psychotherapy, 48(1), 9–16. https://doi.org/10.1037/a0022186

Kohut, H. (1984). How does analysis cure? University of Chicago Press.

Mental Health America. (2024). The state of mental health in America. https://mhanational.org/the-state-of-mental-health-in-america/

Murtaza, Z., Sharma, I., & Carbonell, P. (2025). Unpacking AI chatbot dependency: A dual-path model of cognitive and affective mechanisms. Information, 16(12), 1025. https://doi.org/10.3390/info16121025

Rahsepar Meadi, M., Sillekens, T., Metselaar, S., van Balkom, A., Bernstein, J., & Batelaan, N. (2025). Exploring the ethical challenges of conversational AI in mental health care: Scoping review. JMIR Mental Health. https://mental.jmir.org/2025/1/e60432

Ramseyer, F., & Tschacher, W. (2014). Nonverbal synchrony of head- and body-movement in psychotherapy: Different signals have different associations with outcome. Frontiers in Psychology, 5, 979. https://doi.org/10.3389/fpsyg.2014.00979

Rosenzweig, S. (1936). Some implicit common factors in diverse methods of psychotherapy. American Journal of Orthopsychiatry, 6(3), 412–415. https://doi.org/10.1111/j.1939-0025.1936.tb05248.x

Schore, A. N. (2019). The science of the art of psychotherapy. W. W. Norton.

Sedlakova, J., & Trachsel, M. (2023). Conversational artificial intelligence in psychotherapy: A new therapeutic tool or agent? The American Journal of Bioethics, 23(5), 4–13.

Stern, D. N., Sander, L. W., Nahum, J. P., Harrison, A. M., Lyons-Ruth, K., Morgan, A. C., Bruschweiler-Stern, N., & Tronick, E. Z. (1998). Non-interpretive mechanisms in psychoanalytic therapy: The "something more" than interpretation. International Journal of Psychoanalysis, 79(5), 903–921.

Wampold, B. E. (2015). How important are the common factors in psychotherapy? An update. World Psychiatry, 14(3), 270–277. https://doi.org/10.1002/wps.20238

Wampold, B. E., & Imel, Z. E. (2015). The great psychotherapy debate: The evidence for what works in psychotherapy (2nd ed.). Routledge.

Wampold, B. E., Mondin, G. W., Moody, M., Stich, F., Benson, K., & Ahn, H. (1997). A meta-analysis of outcome studies comparing bona fide psychotherapies: Empirically, "all must have prizes." Psychological Bulletin, 122(3), 203–215. https://doi.org/10.1037/0033-2909.122.3.203

Winnicott, D. W. (1960). The theory of the parent-infant relationship. International Journal of Psychoanalysis, 41, 585–595.

Zhong, W., Luo, J., & Zhang, H. (2024). The therapeutic effectiveness of artificial intelligence-based chatbots in alleviation of depressive and anxiety symptoms in short-course treatments: A systematic review and meta-analysis. Journal of Affective Disorders, 356, 459–469.
